Should We Fear Artificial Superintelligence? A Response

On February 23, 2019, John Loeffler posted an interesting article on interestingengineering.com titled “Should We Fear Artificial Superintelligence?”. In it, Loeffler argues that while Artificial Superintelligence (ASI) can be dangerous, we should mostly be optimistic about it. As a futurist, I am concerned about the possibility of (near-term) human extinction, and I consider ASI one of the greatest dangers we face. So I appreciate it when people think this issue through, since so much of humanity’s attention goes to relatively unimportant things. But while Loeffler and I are both optimistic, we are so for different reasons…